
    Ready Both to Your and to My Hands: Mapping the Action Space of Others

    To date, the mutual interaction between action and perception has been investigated mainly by focusing on single individuals. However, we perceive affording objects and act upon them in a surrounding world inhabited by other perceiving and acting bodies. Thus, the issue arises as to whether our action-oriented object perception might be modulated by the presence of another potential actor. To tackle this issue we used the spatial alignment effect paradigm and systematically examined this effect when a visually presented handled object was located close either to the perceiver or to another individual (a virtual avatar). We found that the spatial alignment effect occurred whenever the object was presented within the reaching space of a potential actor, regardless of whether it was the participant's own or the other's reaching space. These findings show that objects may afford a suitable motor act when they are ready not only to our own hand but also, and most importantly, to the other's hand. We propose that this effect is likely due to a mapping of our own and the other's reaching space, and we posit that such a mapping could play a critical role in joining our own and the other's actions.

    Mapping Robots to Therapy and Educational Objectives for Children with Autism Spectrum Disorder

    The aim of this study was to increase knowledge of the therapy and educational objectives that professionals work on with children with autism spectrum disorder (ASD) and to identify corresponding state-of-the-art robots. Focus group sessions (n = 9) with ASD professionals (n = 53) from nine organisations were carried out to create an overview of objectives, followed by a systematic literature study to identify state-of-the-art robots matching these objectives. Professionals identified many ASD objectives (n = 74) in 9 different domains. State-of-the-art robots addressed 24 of these objectives in 8 domains. Robots can potentially be applied to a large scope of objectives for children with ASD. This objectives overview functions as a base to guide the development of robot interventions for these children.

    The Things You Do: Internal Models of Others' Expected Behaviour Guide Action Observation

    Predictions allow humans to manage uncertainties within social interactions. Here, we investigate how explicit and implicit person models (representations of how different people behave in different situations) shape these predictions. In a novel action identification task, participants judged whether actors interacted with or withdrew from objects. In two experiments, we manipulated, unbeknownst to participants, the two actors' action likelihoods across situations, such that one actor typically interacted with one object and withdrew from the other, while the other actor showed the opposite behaviour. In Experiment 2, participants additionally received explicit information about the two individuals that either matched or mismatched their actual behaviours. The data revealed direct but dissociable effects of both kinds of person information on action identification. Implicit action likelihoods affected response times, speeding up the identification of typical relative to atypical actions, irrespective of explicit knowledge about the individual's behaviour. Explicit person knowledge, in contrast, affected error rates, causing participants to respond according to expectations instead of observed behaviour, even when they were aware that the explicit information might not be valid. Together, the data show that internal models of others' behaviour are routinely re-activated during action observation. They provide the first evidence of a person-specific social anticipation system, which predicts forthcoming actions from both explicit information and an individual's prior behaviour in a situation. These data link action observation to recent models of predictive coding in the non-social domain, where similar dissociations between implicit effects on stimulus identification and explicit behavioural wagers have been reported.

    Recording Eye Movements with the Mobile Eye XG Eye Tracker

    Technological advances in recent decades have made eye trackers, especially glasses, an important tool in the field of cognitive, emotional, and social neurosciences, owing to the relationship between visual behavior and neuronal processes. This has facilitated the study of a significant number of psychological processes, including perception, emotions, social cognition, decision making, attention, and literacy. Eye trackers have been applied to research on a wide range of human activities, including web page and application design, market studies, the visual behavior of drivers and athletes, human-computer interaction, and simulations for military training, and as a support for the clinical diagnosis of personality disorders and neurological conditions. This book aims to provide elements for the planning, design, and execution of research that uses eye trackers, in particular the Mobile Eye XG. It is one of the first reviews in Spanish to collect information on eye movements. It contains a description of the Mobile Eye XG eye tracker and other devices; a review of human vision and eye movements; a review of the cognitive determinants of eye movements; an exploration of the conditions that determine the design, execution, and data analysis of research that uses this tool; and a review of its fields of application.
    Contents: Introduction; Part 1. Description of the Mobile Eye XG eye tracker; Part 2. Human vision and eye movements; Part 3. Neurobiology of eye movements; Part 4. Cognitive determinants of fixations and eye movements; Part 5. Conditions for the design and recording of studies with the Mobile Eye XG eye tracker; Part 6. Analysis and graphical representation of the data; Part 7. Conditions for the research report; Part 8. Applications of eye tracking; References; Appendix.
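    As a concrete illustration of the kind of data analysis such a book covers, the sketch below shows dispersion-based (I-DT) fixation detection, a standard way to group raw gaze samples into fixations. This is a generic sketch, not the procedure described in the book; the thresholds and the (t, x, y) sample format are assumptions.

        from typing import List, Tuple

        Sample = Tuple[float, float, float]  # (time in s, x in deg, y in deg)

        def dispersion(pts: List[Sample]) -> float:
            # Dispersion = horizontal spread + vertical spread of the window.
            xs = [p[1] for p in pts]
            ys = [p[2] for p in pts]
            return (max(xs) - min(xs)) + (max(ys) - min(ys))

        def detect_fixations(samples: List[Sample],
                             max_disp: float = 1.0,   # assumed threshold, deg
                             min_dur: float = 0.100,  # assumed threshold, s
                             ) -> List[Tuple[float, float, float, float]]:
            # Returns fixations as (t_start, t_end, centroid_x, centroid_y).
            fixations, i, n = [], 0, len(samples)
            while i < n:
                j = i + 1
                # Grow the window while the samples stay within the dispersion limit.
                while j < n and dispersion(samples[i:j + 1]) <= max_disp:
                    j += 1
                if samples[j - 1][0] - samples[i][0] >= min_dur:
                    xs = [p[1] for p in samples[i:j]]
                    ys = [p[2] for p in samples[i:j]]
                    fixations.append((samples[i][0], samples[j - 1][0],
                                      sum(xs) / len(xs), sum(ys) / len(ys)))
                    i = j   # skip past the detected fixation
                else:
                    i += 1  # no fixation here; slide one sample forward
            return fixations

    The same windowing idea underlies most commercial fixation filters; only the dispersion and duration thresholds (and the velocity-based alternatives) differ.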

    ELE - A Conversational Social Robot for Persons with Neuro-Developmental Disorders

    Several studies have explored the use of social robots in interventions for persons with cognitive disabilities. This paper describes ELE, a plush social robot with an elephant appearance designed as a conversational companion for persons with Neuro-Developmental Disorders (NDD). ELE speaks through the live voice of a remote caregiver, enriching the communication with body movements. It is integrated with a tool for the automatic gathering and analysis of interaction data that supports therapists in monitoring users during the experience with the robotic companion. The paper describes the design and technology of ELE and presents an empirical study in which eleven persons with NDD used the robot at a local therapeutic center. We compared user engagement in two storytelling experiences, one with ELE and one with a face-to-face human speaker. According to our results, the participants were more engaged with ELE than with the human storyteller, which tentatively indicates the engagement potential of conversational social robots for persons with NDD.

    Predicting others' actions via grasp and gaze: evidence for distinct brain networks

    During social interactions, how do we predict what other people are going to do next? One view is that we use our own motor experience to simulate and predict other people's actions. For example, when we see Sally look at a coffee cup or grasp a hammer, our own motor system provides a signal that anticipates her next action. Previous research has typically examined such gaze- and grasp-based simulation processes separately, and it is not known whether similar cognitive and brain systems underpin the perception of object-directed gaze and grasp. Here we use functional magnetic resonance imaging to examine to what extent gaze and grasp perception rely on common or distinct brain networks. Using a 'peeping window' protocol, we controlled what an observed actor could see and grasp. The actor could peep through one window to see if an object was present and reach through a different window to grasp the object. However, the actor could not peep and grasp at the same time. We compared gaze and grasp conditions where an object was present with matched conditions where the object was absent. When participants observed another person gaze at an object, the left anterior inferior parietal lobule (aIPL) and parietal operculum showed a greater response than when the object was absent. In contrast, when participants observed the actor grasp an object, premotor, posterior parietal, fusiform, and middle occipital brain regions showed a greater response than when the object was absent. These results point towards a division in the neural substrates for different types of motor simulation. We suggest that the left aIPL and parietal operculum are involved in a predictive process that signals a future hand interaction with an object based on another person's eye gaze, whereas a broader set of brain areas, including parts of the action observation network, is engaged during observation of an ongoing object-directed hand action.

    Is gaze following purely reflexive or goal-directed instead? Revisiting the automaticity of orienting attention by gaze cues

    Distracting gaze has been shown to elicit automatic gaze following. However, it is still debated whether the effects of perceived gaze are a simple automatic spatial orienting response or are instead sensitive to context (i.e. goals and task demands). In three experiments, we investigated the conditions under which gaze following occurs. Participants were instructed to saccade towards one of two lateral targets. A face distracter, always present in the background, could gaze towards: (a) a task-relevant target (a "matching" goal-directed gaze shift), congruent or incongruent with the instructed direction; (b) a task-irrelevant target, orthogonal to the instructed one (a "non-matching" goal-directed gaze shift); or (c) an empty spatial location (a no-goal-directed gaze shift). Eye movement recordings showed faster saccadic latencies on correct trials in congruent conditions, especially when the distracting gaze shift occurred before the instruction to make a saccade. Interestingly, participants made a higher proportion of gaze-following errors (i.e. errors in the direction of the distracting gaze) in the incongruent conditions when the distracter's gaze shift preceded the instruction onset, indicating automatic gaze following; yet they never followed the distracting gaze when it was directed towards an empty location or a stimulus that was never the target. Taken together, these findings suggest that gaze following is likely a product of both automatic and goal-driven orienting mechanisms.
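    To make the two dependent measures concrete, the sketch below shows one way trial data from a design like this could be summarized: mean saccadic latency on correct trials, and the rate of gaze-following errors per condition. The record format, field names, and example values are hypothetical illustrations, not the study's actual data or analysis code.

        from statistics import mean

        # Hypothetical trial records: condition, saccade onset latency, whether
        # the saccade went in the distracter's gaze direction, and correctness.
        trials = [
            {"condition": "congruent",   "latency_ms": 182, "followed_gaze": True,  "correct": True},
            {"condition": "incongruent", "latency_ms": 240, "followed_gaze": False, "correct": True},
            {"condition": "incongruent", "latency_ms": 175, "followed_gaze": True,  "correct": False},
            # ... one record per trial
        ]

        def summarize(trials, condition):
            subset = [t for t in trials if t["condition"] == condition]
            correct = [t for t in subset if t["correct"]]
            return {
                # Latency is conventionally analyzed on correct trials only.
                "mean_correct_latency_ms":
                    mean(t["latency_ms"] for t in correct) if correct else None,
                # Gaze-following error: wrong target, in the distracting gaze direction.
                "gaze_following_error_rate": sum(
                    1 for t in subset if t["followed_gaze"] and not t["correct"]
                ) / len(subset),
            }

        for condition in ("congruent", "incongruent"):
            print(condition, summarize(trials, condition))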